Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Cyber Security. Show all posts

Zimbra Zero-Day Exploit Used in ICS File Attacks to Steal Sensitive Data

 

Security researchers have discovered that hackers exploited a zero-day vulnerability in Zimbra Collaboration Suite (ZCS) earlier this year using malicious calendar attachments to steal sensitive data. The attackers embedded harmful JavaScript code inside .ICS files—typically used to schedule and share calendar events—to target vulnerable Zimbra systems and execute commands within user sessions. 

The flaw, identified as CVE-2025-27915, affected ZCS versions 9.0, 10.0, and 10.1. It stemmed from inadequate sanitization of HTML content in calendar files, allowing cybercriminals to inject arbitrary JavaScript code. Once executed, the code could redirect emails, steal credentials, and access confidential user information. Zimbra patched the issue on January 27 through updates (ZCS 9.0.0 P44, 10.0.13, and 10.1.5), but at that time, the company did not confirm any active attacks. 

StrikeReady, a cybersecurity firm specializing in AI-based threat management, detected the campaign while monitoring unusually large .ICS files containing embedded JavaScript. Their investigation revealed that the attacks began in early January, predating the official patch release. In one notable instance, the attackers impersonated the Libyan Navy’s Office of Protocol and sent a malicious email targeting a Brazilian military organization. The attached .ICS file included Base64-obfuscated JavaScript designed to compromise Zimbra Webmail and extract sensitive data. 

Analysis of the payload showed that it was programmed to operate stealthily and execute in asynchronous mode. It created hidden fields to capture usernames and passwords, tracked user actions, and automatically logged out inactive users to trigger data theft. The script exploited Zimbra’s SOAP API to search through emails and retrieve messages, which were then sent to the attacker every four hours. It also added a mail filter named “Correo” to forward communications to a ProtonMail address, gathered contacts and distribution lists, and even hid user interface elements to avoid detection. The malware delayed its execution by 60 seconds and only reactivated every three days to reduce suspicion. 

StrikeReady could not conclusively link the attack to any known hacking group but noted that similar tactics have been associated with a small number of advanced threat actors, including those linked to Russia and the Belarusian state-sponsored group UNC1151. The firm shared technical indicators and a deobfuscated version of the malicious code to aid other security teams in detection efforts. 

Zimbra later confirmed that while the exploit had been used, the scope of the attacks appeared limited. The company urged all users to apply the latest patches, review existing mail filters for unauthorized changes, inspect message stores for Base64-encoded .ICS entries, and monitor network activity for irregular connections. The incident highlights the growing sophistication of targeted attacks and the importance of timely patching and vigilant monitoring to prevent zero-day exploitation.

Rise of Evil LLMs: How AI-Driven Cybercrime Is Lowering Barriers for Global Hackers

 

As artificial intelligence continues to redefine modern life, cybercriminals are rapidly exploiting its weaknesses to create a new era of AI-powered cybercrime. The rise of “evil LLMs,” prompt injection attacks, and AI-generated malware has made hacking easier, cheaper, and more dangerous than ever. What was once a highly technical crime now requires only creativity and access to affordable AI tools, posing global security risks. 

While “vibe coding” represents the creative use of generative AI, its dark counterpart — “vibe hacking” — is emerging as a method for cybercriminals to launch sophisticated attacks. By feeding manipulative prompts into AI systems, attackers are creating ransomware capable of bypassing traditional defenses and stealing sensitive data. This threat is already tangible. Anthropic, the developer behind Claude Code, recently disclosed that its AI model had been misused for personal data theft across 17 organizations, with each victim losing nearly $500,000. 

On dark web marketplaces, purpose-built “evil LLMs” like FraudGPT and WormGPT are being sold for as little as $100, specifically tailored for phishing, fraud, and malware generation. Prompt injection attacks have become a particularly powerful weapon. These techniques allow hackers to trick language models into revealing confidential data, producing harmful content, or generating malicious scripts. 

Experts warn that the ability to override safety mechanisms with just a line of text has significantly reduced the barrier to entry for would-be attackers. Generative AI has essentially turned hacking into a point-and-click operation. Emerging tools such as PromptLock, an AI agent capable of autonomously writing code and encrypting files, demonstrate the growing sophistication of AI misuse. According to Huzefa Motiwala, senior director at Palo Alto Networks, attackers are now using mainstream AI tools to compose phishing emails, create ransomware, and obfuscate malicious code — all without advanced technical knowledge. 

This shift has democratized cybercrime, making it accessible to a wider and more dangerous pool of offenders. The implications extend beyond technology and into national security. Experts warn that the intersection of AI misuse and organized cybercrime could have severe consequences, particularly for countries like India with vast digital infrastructures and rapidly expanding AI integration. 

Analysts argue that governments, businesses, and AI developers must urgently collaborate to establish robust defense mechanisms and regulatory frameworks before the problem escalates further. The rise of AI-powered cybercrime signals a fundamental change in how digital threats operate. It is no longer a matter of whether cybercriminals will exploit AI, but how quickly global systems can adapt to defend against it. 

As “evil LLMs” proliferate, the distinction between creative innovation and digital weaponry continues to blur, ushering in an age where AI can empower both progress and peril in equal measure.

Agentic AI Demands Stronger Digital Trust Systems

 

As agentic AI becomes more common across industries, companies face a new cybersecurity challenge: how to verify and secure systems that operate independently, make decisions on their own, and appear or disappear without human involvement. 

Consider a financial firm where an AI agent activates early in the morning to analyse trading data, detect unusual patterns, and prepare reports before the markets open. Within minutes, it connects to several databases, completes its task, and shuts down automatically. This type of autonomous activity is growing rapidly, but it raises serious concerns about identity and trust. 

“Many organisations are deploying agentic AI without fully thinking about how to manage the certificates that confirm these systems’ identities,” says Chris Hickman, Chief Security Officer at Keyfactor. 

“The scale and speed at which agentic AI functions are far beyond what most companies have ever managed.” 

AI agents are unlike human users who log in with passwords or devices tied to hardware. They are temporary and adaptable, able to start, perform complex jobs, and disappear without manual authentication. 

This fluid nature makes it difficult to manage digital certificates, which are essential for maintaining trusted communication between systems. 

Greg Wetmore, Vice President of Product Development at Entrust, explains that AI agents act like both humans and machines. 

“When an agent logs into a system or updates data, it behaves like a human user. But when it interacts with APIs or cloud platforms, it looks more like a software component,” he says. 

This dual behaviour requires a flexible security model. AI agents need stable certificates that prove their identity and temporary credentials that control what they are allowed to do. 

These permissions must be revocable in real time if the system behaves unexpectedly. The challenge becomes even greater when AI agents begin interacting with each other. Without proper cryptographic controls, one system could impersonate another. 

“Once agents start sharing information, certificate management becomes absolutely essential,” Hickman adds. 

Complicating matters further, three major changes are hitting cryptography at once. Certificate lifespans are being shortened to 47 days, post-quantum algorithms are nearing adoption, and organisations must now manage a far larger number of certificates due to AI automation. 

“We’re seeing huge changes in cryptography after decades of stability,” Hickman notes. “It’s a lot to handle for many teams.” 

Keyfactor’s research reveals that almost half of all organisations have not begun preparing for post-quantum encryption, and many still lack a clearly defined role for managing cryptography. 

This lack of governance poses serious risks, especially when certificate management is handled by IT departments without deep security expertise. Still, experts believe the situation can be managed with existing tools. 

“Agentic AI fits well within established security models such as zero trust,” Wetmore explains. “The technology to issue strong identities, enforce policies, and limit access already exists.” 

According to Sebastian Weir, AI Practice Leader at IBM UK and Ireland, many companies are now focusing on building security into AI projects from the start. 

“While AI development can be up to four times faster, the first version of code often contains many more vulnerabilities...” 

“...Organisations are learning to consider security early instead of adding it later,” he says.

Financial institutions are among those leading the shift, building identity systems that blend the stability of long-term certificates with the flexibility of short-term authorisations. 

Hickman points out that Public Key Infrastructure (PKI) already supports similar scale in IoT environments, managing billions of certificates worldwide. 

He adds, “PKI has always been about scale. The same principles can support agentic AI if implemented properly.” The real focus now, according to experts, should be on governance and orchestration. 

“Scalability depends on creating consistent and controllable deployment patterns. Orchestration frameworks and governance layers ensure transparency and auditability," says Weir. 

Poorly managed AI agents can cause significant damage. Some have been known to delete vital data or produce false financial information due to misconfiguration.

This makes it critical for companies to monitor agent behaviour closely and apply zero-trust principles where every interaction is verified. 

Securing agentic AI does not require reinventing cybersecurity. It requires applying proven methods to a new, fast-moving environment. 

“We already know that certificates and PKI work. An AI agent can have one certificate for identity and another for authorisation. The key is in how you manage them,” Hickman concludes. 

As businesses accelerate their use of AI, the winners will be those that design trust into their systems from the beginning. By investing in certificate lifecycle management and clear governance, they can ensure that every AI agent operates safely and transparently. Those who ignore this step risk letting their systems act autonomously in the dark, without the trust and control that modern enterprises demand.

Illumio Report Warns: Lateral Movement, Not Breach Entry, Causes the Real Cybersecurity Damage

 

In most cyberattacks, the real challenge doesn’t begin at the point of entry—it starts afterward. Once cybercriminals infiltrate a system, they move laterally across networks, testing access points, escalating privileges, and expanding control until a small breach becomes a full-scale compromise. Despite decades of technological progress, the core lesson remains: total prevention is impossible, and it’s the spread of an attack that does the deepest damage.

Illumio’s 2025 Global Cloud Detection and Response Report echoes this reality. Although many organizations claim to monitor east-west traffic and hybrid communications, few possess the contextual clarity to interpret the data effectively. Collecting logs and flow metrics is easy; understanding which workloads interact—and whether that interaction poses a risk—is where visibility breaks down.

Illumio founder and CEO Andrew Rubin highlighted this disconnect: “Everybody loves to say that we’ve got a data or a telemetry problem. I actually think that may be the biggest fallacy of all. We have more data and telemetry than we’ve ever had. The problem is we haven’t figured out how to use it in a highly efficient, highly effective way.”

The report reveals how overwhelmed security teams are by alert fatigue. Thousands of daily notifications—many of them false positives—leave analysts sifting through noise, hoping to identify the few signals that matter. Some describe it as “alert triage roulette,” where the odds of catching a genuine attack indicator are slim.

This inefficiency is costly. Missed alerts lead to prolonged downtime and severe financial losses. Rubin stressed that attackers often stay hidden for months: “Attackers are getting in. They’re literally moving into our house and living with us for months, totally undetected. That means we’re flying blind.”

Despite the adoption of advanced tools like CDR, NDR, XDR, SIEM, and SOAR, blind spots persist. The cybersecurity industry keeps adding layers of detection, but without correlation and context, more data simply amplifies the noise.

Shifting the Security Focus

The narrative now needs to move from “more detection” to “greater observability and containment.” Observability provides enriched context—who’s accessing what, from where, and how critical it is—across clouds and data centers, visualizing potential attack paths and blast radii. Containment acts on that insight, ideally through automation, to isolate or block threats before they escalate.

Rubin summarized it succinctly: “If you want to limit the blast radius of an attack, there are only two things you can do: find it quickly, and segment the environment. They are the only controls that help.”

Heading into 2026, organizations are prioritizing AI and machine learning integration, cloud detection and response, and faster incident remediation. As Rubin noted, AI is transforming both defense and offense in cybersecurity: “AI is going to be a tool in the hands of both the defenders and the attackers forever. In the short term, the advantage probably goes to those who operate outside the rule of law. The one thing we can do to combat that is better observability and finding things faster than we have in the past.”

Ultimately, the report reinforces one truth: visibility without understanding is useless. Companies that convert visibility into context, and context into containment, will stay ahead. In cybersecurity, speed and clarity will always triumph over noise and volume.

Microsoft Stops Phishing Scam Which Used Gen-AI Codes to Fool Victims


AI: Boon or Curse?

AI code is in use across sectors for variety of tasks, particularly cybersecurity, and both threat actors and security teams have turned to LLMs for supporting their work. 

Security experts use AI to track and address to threats at scale as hackers are experimenting with AI to make phishing traps, create obfuscated codes, and make spoofed malicious payloads. 

Microsoft Threat Intelligence recently found and stopped a phishing campaign that allegedly used AI-generated code to cover payload within an SVG file. 

About the campaign 

The campaign used a small business email account to send self addressed mails with actual victims coveted in BCC fields, and the attachment looked like a PDF but consisted SVG script content. 

The SVG file consisted hidden elements that made it look like an original business dashboard, while a secretly embedded script changed business words into code that exposed a secret payload. Once opened, the file redirects users to a CAPTCHA gate, a standard social engineering tactical that leads to a scanned sign in page used to steal credentials. 

The hidden process combined business words and formulaic code patterns instead of cryptographic techniques. 

Security Copilot studied the file and listed markers in lines with LLM output. These things made the code look fancy on the surface, however, it made the experts think it was AI generated. 

Combating the threat

The experts used AI powered tools in Microsoft Defender for Office 375 to club together hints that were difficult for hackers to push under the rug. 

The AI tool flagged the rare self-addressed email trend , the unusual SVG file hidden as a PDF, the redirecting to a famous phishing site, the covert code within the file, and the detection tactics deployed on the phishing page. 

The incident was contained, and blocked without much effort, mainly targeting US based organizations, Microsoft, however, said that the attack show how threat actors are aggressively toying with AI to make believable tracks and sophisticated payloads.

Andesite AI Puts Human Analysts at the Center of Cybersecurity Innovation

 

Andesite AI Inc., a two-year-old cybersecurity startup, is reimagining how human expertise and artificial intelligence can work together to strengthen digital defense. Founded by former CIA officers Brian Carbaugh and William MacMillan, the company aims to counter a fragmented cybersecurity landscape that prioritizes technology over the people who operate it. Carbaugh, who spent 24 years at the CIA leading its Global Covert Action unit, said his experience showed him both the power and pitfalls of a technology-first mindset. He noted that true security efficiency comes when teams have seamless access to information and shared intelligence — something still missing in most cybersecurity ecosystems.  

MacMillan, Andesite’s chief product officer, echoed that sentiment. After two decades at the CIA and a leadership role at Salesforce Inc., he observed that Silicon Valley’s focus on building flashy “blinky boxes” has often ignored the needs of cybersecurity operators. He believes defenders should be treated like fighter pilots of the digital age — skilled professionals equipped with the best possible systems, not burdened by cumbersome tools and burnout. 

As generative AI becomes a double-edged sword in cybersecurity, the founders warn that attackers are increasingly using AI to automate exploits and identify vulnerabilities faster than ever. MacMillan cautioned that “the weaponization of gen AI by bad actors is going to be gnarly,” emphasizing the need for defense teams to be equally equipped and adaptable. 

To meet this challenge, Andesite AI has designed a platform that centers on human decision-making. Instead of replacing staff, it provides a “decision layer” that connects with an organization’s existing security tools, harmonizes data, and uses what MacMillan calls “evidentiary AI.” This system explains its reasoning as it correlates alerts, prioritizes threats, and recommends next steps, offering transparency that traditional AI systems often lack. The software can be deployed flexibly — from SaaS models to secure on-premises environments — ensuring adaptability across industries. 

By eliminating the need for analysts to switch between multiple dashboards or write complex queries, Andesite’s technology allows staff to engage with the system in natural language. Analysts can ask questions and receive context-rich insights in real time. The company claims that one workflow, previously requiring 1,000 analyst hours, was reduced to under three minutes using its platform. 

Backed by $38 million in funding, including a $23 million round led by In-Q-Tel Inc., Andesite AI’s client base spans government agencies and private enterprises. Named after a durable igneous rock, the startup plans to expand beyond its AI for Security Operations Centers into areas like fraud detection and risk management. For now, Carbaugh says their focus remains on “delivering absolute white glove excellence” to early adopters as they redefine how humans and AI collaborate in cybersecurity.

Security Experts Warn of Audio Leakage Through Gaming Mice

 


A startling discovery has been made in a study by researchers at UCI, which pertains to a rare side-channel risk associated with high-performance optical mice. The study found that the sensors and polling rates that enable precision can also be used as clandestine acoustic detectors.

Known as Mic-E-Mouse, the technique involves reconstructing nearby speech from the minute vibrations that are recorded by sensors in mice with a DPI rating over 20,000; by applying advanced signal-processing pipelines and machine-learning enhancements, the research team proved that recognizable speech and intelligible audio could be recovered from raw data collected by mice packets. 

A critical aspect of the attack is that it requires only a vulnerability on the host computer that can be accessed through the use of high-frequency mouse readings-a capability readily found in many creative applications, games, and even seemingly benign web interfaces-before the harvested packets can be exfiltrated and processed off-site using the exploitation of high-frequency mouse readings. 

Considering that top-tier gaming mice have become increasingly affordable, the findings highlight a widening attack surface in everyday consumer hardware and underscore how manufacturers and security teams must consider reevaluating their assumptions about peripheral trust and data exposure for everyday consumer hardware. 

According to a recent study published by a team of researchers at the University of California, Irvine, the modern high DPI optical sensors - designed for flawless precision in gaming and creative applications - can actually act as sophisticated listening devices inadvertently. 

 As a result of the “Mic-E-Mouse” experiment, it was discovered that these sensors, particularly those with a resolution exceeding 20,000 DPI, have been found to be capable of detecting imperceptible desk vibrations induced by nearby speech and to reconstruct audio under controlled conditions with a rate of 42 to 61 percent accuracy by combining advanced signal processing and neural network models. 

There is no need to install malicious software or acquire administrative privileges for this exploitation, unlike traditional surveillance methods. Almost any legitimate application that can access mouse data in high frequency – such as games, design tools, or even routine productivity tools – can be used to harvest raw sensor readings by using high-frequency mouse data. 

It is possible to transmit these data streams off-site for audio reconstruction without alerting the user, so that they can appear indistinguishable from regular input traffic. What makes this discovery particularly troubling is that it is easily accessible to anyone: gaming mice are now available for a price of under thirty dollars, resulting in a technology that is able to sit innocuously on millions of desks around the world. 

In many cases, these devices, once trusted to enhance precision and performance, may now, unknowingly, be used as channels of covert eavesdropping - changing the very devices designed to maximize digital efficiency into instruments of eavesdropping. It is the responsibility of Habib Fakih, Rahul Dharmaji, Youssef Mahmoud, Halima Bouzidi, and Mohammad Abdullah Al Faruque, a team from the Department of Electrical Engineering and Computer Science at the University of California, Irvine, to a detailed study published on arXiv on September 16, 2025, that outlines the technical framework that underpins this unconventional method of eavesdropping. 

It was developed by the researchers that they could convert shifting, seemingly random data associated with mouse movements into discernible audio signals by using a sophisticated, multi-phase pipeline. A significant improvement in signal clarity of +19 dB was achieved by systematically filtering noise and reconstructed speech patterns through advanced signal processing and machine learning algorithms. Speech recognition accuracy ranged between 42% and 61% across standard speech datasets, with the system performing systematically filtering noise, reconstructing speech patterns, and regenerating speech patterns. 

In particular, what makes this attack especially insidious is that it is straightforward: you do not have to install malware, escalate privileges, or use complex intrusion techniques. This method requires merely access to high-frequency mouse data, which is usually obtained through legitimate applications such as creative software or gaming platforms that require real-time input from the user. 

It is almost impossible to differentiate the entire data collection process from normal mouse activity in the background, which is completely undetectable, while the audio reconstruction can take place remotely on an attacker's server, which is completely invisible in the background. It is crucial that hardware manufacturers introduce safeguards against this novel form of exploitation to prevent this form of exploitation from taking place in the future, as demonstrated by a video proof-of-concept released by the research team. 

 According to the researchers, the implications of this study go beyond the lab as well—widely available high-DPI mouse products at affordable prices mean millions of devices in homes and offices could inadvertently become surveillance tools. It is clear from these findings that technological advancements often come with unforeseen vulnerabilities, which highlights how technological advancement can often lead to unexpected failures. 

It is a multi-stage system which uses subtle desk vibrations to translate normal mouse sensor data into audible speech through a multi-stage process. It was designed by the researchers to collect non-uniform motion data from high-definition (DPI) sensors, then to apply advanced signal processing techniques like Wiener filtering to suppress noise and isolate meaningful vibration patterns based on this data. 

An artificial neural network that is trained on existing speech datasets reconstructs intelligible audio from these filtered signals, thereby increasing the signal-to-noise ratio by as much as 19 decibels in controlled test environments. The researchers also discovered that the effectiveness of the attack was heavily influenced by the environment. 

Softer material surfaces, such as paper or plastic, proved to transmit vibrations more effectively than denser materials, such as thick cardboard or rigid desks, while the most accurate results were achieved with normal conversational speech levels from 60 to 80 decibels. In the paper’s appendix, 26 models of mouse – which cost between $35 and $350 – have been identified as vulnerable to this type of exploitation as they continue to push for higher sensor precision at lower costs. 

While the potential exposure to these sensors does extend beyond individuals, there are increasing risks that can be posed to corporations, government, and military organizations. According to the researchers, Mic-E-Mouse is a vector within a larger threat model of data exfiltration. In order to protect against this threat, defenders need to consider a combination of technical and procedural countermeasures. 

These measures include limiting high-frequency polling rates in enterprise software, monitoring applications that transmit raw HID telemetry, implementing tight policies regarding endpoints and USB drives, and installing vibration-damping surfaces at sensitive areas. As part of their advocacy, they suggest collaborating with hardware vendors in order to introduce firmware-level randomization, as well as better API documentation, to prevent unauthorized high-frequency sampling from happening.

The study reinforces the conclusion of a critical security study: the physical environment becomes a potential data channel as consumer sensors become more sensitive, which modern security architectures need to be able to counter. Researchers at UC Irvine created an experiment where they captured raw, noisy motion data from a high-DPI optical mouse sensor while simultaneously replaying speech in order to test the sensor's ability to detect vibration-based acoustic signals. 

A number of factors contributed to the low quality of the initial data traces, including non-uniform sampling, quantization errors, and frequency limitations inherent in consumer hardware systems. Using machine-learning methods in combination with filters that remove background noise, correct any inconsistencies in the sampling process, and utilize sampling inconsistencies to reconstruct distinct audio signals, the researchers were able to overcome these challenges. 

There has been a significant improvement in signal quality, with gains of up to +19 decibels, as well as speech recognition performances that are capable of extracting meaningful phrases and context-a significant advantage for the intelligence community as well as privacy officials. 

 An interesting aspect of this exploit is that it does not require the access to privileged permissions or operating system audio interfaces; it just requires the ability to read and transmit HID packet data, a feature that a lot of legitimate applications already do. Because of this vulnerability, a wide range of environments are potentially vulnerable, from corporate offices to government workstations to home computers, and it can affect a wide range of environments. 

A high-fidelity mouse on a desk could allow you to reconstruct conversations taking place at a desk where there was a high-fidelity mouse, for example, confidential meetings, strategic discussions, or private calls, without having to activate the microphone at all. A number of security experts argue that Mic-E-Mouse is essentially an extension of data exfiltration risk, which necessitates layered defenses. 

As mitigations, risks should be reduced by limiting high-frequency pointer polling in enterprise software, monitoring raw HID traffic coming out of endpoints, tightening endpoint protection controls, and enforcing strict controls on USB device usage. A physical precaution is the use of vibration-damping mouse pads, and the use of peripherals with a lower DPI in sensitive areas to reduce the risk of exposure. 

It is also recommended that manufacturers implement firmware-level randomization and greater API transparency, which allows operating systems to mediate high-frequency data requests by implementing firmware-level randomization. Having said that, the study emphasizes that this is an important part of a wider concern regarding cybersecurity: as everyday sensors become more powerful and affordable, they also open up unanticipated doors to data leaks, transforming even the most trusted peripheral devices into surveillance-related tools. 

In light of the recent revelations regarding Mic-E-Mouse, it becomes increasingly evident that the advancement of consumer technology must be accompanied by a rigorous evolution in security awareness. As devices become smarter, faster, and more precise, they also become more susceptible to being misused in a way that is often undetected by conventional defense mechanisms. 

It is evident from the UC Irvine team's findings that it is essential for hardware designers, software developers, and cybersecurity experts to collaborate in order to establish new standards for sensor privacy and data governance. In addition to immediate measures, organizations should foster a culture of “peripheral hygiene,” whereby every connected device is treated as a potential data source that must be validated and controlled. 

By encouraging vendors to be transparent, integrating firmware-based safeguards, and educating users on emerging side-channel risks, it is possible to close the gap between innovation and exploitation. It is important to note that Mic-E-Mouse isn't just an isolated exploit—it is a warning shot signaling the very surface and sensors surrounding us have become a target of cybercrime. There is a thin line between performance and privacy, and vigilance rather than convenience should define the next phase of digital trust, since performance needs to be balanced against privacy.

AI Adoption Outpaces Cybersecurity Awareness as Users Share Sensitive Data with Chatbots

 

The global surge in the use of AI tools such as ChatGPT and Gemini is rapidly outpacing efforts to educate users about the cybersecurity risks these technologies pose, according to a new study. The research, conducted by the National Cybersecurity Alliance (NCA) in collaboration with cybersecurity firm CybNet, surveyed over 6,500 individuals across seven countries, including the United States. It found that 65% of respondents now use AI in their everyday lives—a 21% increase from last year—yet 58% said they had received no training from employers on the data privacy and security challenges associated with AI use. 

“People are embracing AI in their personal and professional lives faster than they are being educated on its risks,” said Lisa Plaggemier, Executive Director of the NCA. The study revealed that 43% of respondents admitted to sharing sensitive information, including company financial data and client records, with AI chatbots, often without realizing the potential consequences. The findings highlight a growing disconnect between AI adoption and cybersecurity preparedness, suggesting that many organizations are failing to educate employees on how to use these tools responsibly. 

The NCA-CybNet report aligns with previous warnings about the risks posed by AI systems. A survey by software company SailPoint earlier this year found that 96% of IT professionals believe AI agents pose a security risk, while 84% said their organizations had already begun deploying the technology. These AI agents—designed to automate tasks and improve efficiency—often require access to sensitive internal documents, databases, or systems, creating new vulnerabilities. When improperly secured, they can serve as entry points for hackers or even cause catastrophic internal errors, such as one case where an AI agent accidentally deleted an entire company database. 

Traditional chatbots also come with risks, particularly around data privacy. Despite assurances from companies, most chatbot interactions are stored and sometimes used for future model training, meaning they are not entirely private. This issue gained attention in 2023 when Samsung engineers accidentally leaked confidential data to ChatGPT, prompting the company to ban employee use of the chatbot. 

The integration of AI tools into mainstream software has only accelerated their ubiquity. Microsoft recently announced that AI agents will be embedded into Word, Excel, and PowerPoint, meaning millions of users may interact with AI daily—often without any specialized training in cybersecurity. As AI becomes an integral part of workplace tools, the potential for human error, unintentional data sharing, and exposure to security breaches increases. 

While the promise of AI continues to drive innovation, experts warn that its unchecked expansion poses significant security challenges. Without comprehensive training, clear policies, and safeguards in place, individuals and organizations risk turning powerful productivity tools into major sources of vulnerability. The race to integrate AI into every aspect of modern life is well underway—but for cybersecurity experts, the race to keep users informed and protected is still lagging far behind.

Spike in Login Portal Scans Puts Palo Alto Networks on Alert


 

The Palo Alto Networks login portals have seen a dramatic surge in suspicious scanning activity over the past month, a development that has caught the attention of the cybersecurity community. Evidence suggests that threat actors are trying to coordinate reconnaissance efforts aimed at the Palo Alto Networks login portals. 

A new report from cybersecurity intelligence firm GreyNoise revealed that Palo Alto Networks' GlobalProtect and PAN-OS interfaces saw an increase in scanning volumes of over 500%, which marks a sharp departure from the usual pattern for such scanning. In the last week of October, the firm recorded more than 1,285 unique IP addresses attempting to probe these systems - a sharp rise from the typical daily average of fewer than 200 that occurs on a regular basis. 

Approximately 80% of this activity was attributed to IP addresses in the United States, with additional clusters originating from IP addresses in the United Kingdom, the Netherlands, Canada, and Russia. Moreover, separate TLS fingerprints indicated that there were organised scanning clusters that were heavily oriented towards United States targets as well as Pakistani targets. 

A GreyNoise analyst classifies 91% of the observed IP addresses as suspicious, while the remaining 7% are suspected to be malicious, indicating this may represent an early phase of targeted reconnaissance or exploitation attempts against Palo Alto Networks' infrastructure that is widely deployed. 

A GreyNoise analysis revealed that a large portion of the scanning traffic originated from U.S. IP addresses, with smaller but noteworthy clusters originating from the United Kingdom, the Netherlands, Canada, and Russia, indicating the traffic originated primarily from the United States. Using TLS fingerprints, research identified distinct activity clusters – targeting foand cusing o and focusing on Pakistani systems, focusing, overlapping fingerprints, suggesting infrastructure or coordination. 

Ninety per cent of the IP addresses involved in the campaign were deemed suspicious, while another seven per cent were flagged as malicious by the firm. It has been observed that most scanning activity has been directed towards emulated Palo Alto Networks profiles, including GlobalProtect and PAN-OS, indicating that the probes were likely to be intentional and are the product of open-source scanning tools or attackers who are conducting reconnaissance efforts to identify vulnerable Palo Alto devices. 

According to GreyNoise, heightened scanning activity can often be detected before zero-day or zero-n-day vulnerabilities are exploited, acting as a warning to potential offensive operations well in advance. A similar pattern was observed earlier this year, as a spike in Cisco ASA scans followed shortly thereafter by the disclosure and exploitation of a critical zero-day vulnerability in that product line, which was a warning of potential offensive operations. 

Although the timing and scale of the current Palo Alto scans are cause for concern, researchers have clarified that the available evidence suggests a weak correlation with any known or emerging exploit activity at this point in the Palo Alto network ecosystem. Palo Alto Networks' GlobalProtect platform is the core of its next-generation firewall ecosystem, allowing organisations to implement consistent policies for threat prevention and security across remote endpoints, regardless of whether or not the endpoints are connected to a virtual network. 

GlobalProtect portals are critical management tools that enable administrators to customize VPN settings, distribute security agents, and oversee endpoint connectivity within enterprise networks by allowing them to configure VPN settings, distribute security agents, and manage endpoint connectivity. Due to its function and visibility on the Internet, the portal is considered a high-value target for attackers looking to access sensitive data. 

According to experts, firewalls, VPNs, and other edge-facing technologies are among the most attractive security tools for attackers because they act as gateways between internal corporate environments and the open internet as a whole. These systems, by necessity, are available online to support remote operations, but are inadvertently exposing themselves to extensive reconnaissance and scanning efforts as a result. 

A few weeks earlier, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) released a warning indicating that Palo Alto Networks would be actively exploited if it were to exploit a zero-day authentication bypass vulnerability in the company's PAN-OS software. This has increased Palo Alto Networks' appeal to cyber adversaries. As with other cyber threats, similar trends have been observed across the entire industry. 

For example, Cisco Talos disclosed last year that two zero-day flaws in Cisco firewall appliances were exploited by a state-backed threat actor to conduct an espionage campaign coordinated with Cisco. These risks highlight the persistence of the threats vendors are facing when it comes to edge security infrastructure vendors.

Among experts in the field of cybersecurity, it is very important to recognise that recent spikes in scanning activity targeting Palo Alto Networks' PAN-OS GlobalProtect gateways highlight a long-standing principle of cybersecurity: there is always a vulnerability in software. According to Boris Cipot, Senior Security Engineer at Black Duck, no matter how sophisticated a piece of software is, security vulnerabilities will inevitably arise at some point, whether due to programming oversight or the introduction of vulnerabilities by third-party open-source components. 

According to him, the real test is not whether a vulnerability exists but how swiftly the affected vendor releases a fix and how quickly the users apply the fix. The Palo Alto Networks spokesperson told me that while most Palo Alto Networks customers have probably patched their systems in response to recent advisories, attackers continue to hunt for devices that are not patched or poorly maintained, hoping that they can exploit those that are not well secured. 

Among Cipot's recommendations are to perform timely patching, follow vendor-recommended mitigations when patches are not available, and restrict management interfaces to trusted internal networks, which, he says, is also one of the most fundamental practices. 

The report also recommends that organisations use continuous log monitoring, conduct regular security audits, and analyse open-source components to identify vulnerabilities as early as possible in the lifecycle. A Salt Security director, Eric Schwake, who is responsible for cybersecurity strategy, expressed the concerns of these people by pointing out that the pattern of scans, which span nearly 24,000 unique IP addresses, demonstrates the persistence of threat actors in attempting to gain unauthorised access to data. 

While perimeter security, such as firewalls and VPNs, is still crucial, it should not be viewed as impenetrable, according to Schwake. As a result, he recommended organisations adopt a multi-layered security approach integrating API security governance, robust authentication mechanisms, and behavioural threat detection in order to detect abnormal login attempts as well as other malicious activities immediately in real time, as opposed to just relying on a single approach. 

Also, it was recommended that users be trained in user awareness, and multifactor authentication (MFA) should be enforced in order to reduce the risk of credential compromise and strengthen the overall cyber resilience of organisations. A GreyNoise security research team has noted unusual scanning activity directed at Palo Alto Networks’ PAN-OS GlobalProtect gateways for a number of years. 

In April 2025, the cybersecurity intelligence firm spotted another wave of suspicious login probes, resulting in Palo Alto Networks advising its customers to make sure that their systems are running the latest software versions and to apply all patches available to them. There are several patterns in GreyNoise’s Early Warning Signals report from July 2025 that support the company’s renewed warning. Among those patterns are large-scale spikes in malicious scanning, brute-force attempts, or exploit probing, which often follow a new CVE being disclosed within six weeks of the spike in those activities.

A similar pattern appeared to occur in early September 2025 when GreyNoise detected an increase in suspicious network scans targeting Cisco Adaptive Security Appliance (ASA) devices - traced back to late August. A total of 25,100 IP addresses were involved in the initial wave, primarily located in Brazil, Argentina, and the United States, with most originating from Brazil. 

Researchers at Palo Alto Networks have discovered what appears to be an alarming rise in the number of scanning sessions available on the Internet targeting a critical flaw in the software Palo Alto Networks GlobalProtect, identified as CVE-2024-3400. There is a high-severity vulnerability that affects one of the most widely deployed enterprise firewall solutions, allowing the creation of arbitrary files that can be weaponised in order to execute root privilege-based commands on the operating system.

By exploiting such vulnerabilities, attackers are able to gain complete control over affected devices, potentially resulting in the theft of sensitive data, the compromise of critical network functions, and even the disruption of critical network functions. In the last few weeks, analysts have noticed a significant increase in the probing attempts of this exploit, suggesting that threat actors have been actively incorporating it into their attack arsenals. 

The fact that GlobalProtect serves as a gateway to the internet in many corporate environments increases the risks associated with the flaw, which is remote and unauthenticated. A surge of malicious reconnaissance, according to analysts, could be the precursor to coordinated intrusion campaigns. This makes it imperative that organizations implement security patches as soon as possible, enforce access restrictions, and strengthen monitoring mechanisms across all perimeter defenses, as well as implement security patches as soon as possible.

Only weeks after the discovery of one of the exploitable zero-day vulnerabilities in its ASA products (CVE-2025-20333), Cisco confirmed that the other zero-day vulnerability in the same product (CVE-2025-2020362) was actively exploited, enabling advanced malware strains such as RayInitiator and LINE VIPER to be deployed in real-world attacks. 

In accordance with the data supplied by the Shadowserver Foundation, over 45,000 Cisco ASA and Firepower Threat Defence instances in the world, including more than 20,000 in the United States, remain susceptible to these vulnerabilities. It is evident that organisations reliant on perimeter security technologies face escalating threats and are faced with an ongoing challenge of timely patch adoption, as well as the escalating risks associated with them. 

This latest surge in scanning activity serves as yet another reminder that cyber threats are constantly evolving, and that is why maintaining vigilance, visibility, and velocity is so crucial in terms of defence against them. As reconnaissance efforts become more sophisticated and automated, organisations have to take more proactive steps - both in terms of integrating threat intelligence, continuously monitoring, and managing attack surfaces in order to remain effective. 

This cannot be done solely through vendor patches. It is imperative to combine endpoint hardening, strict access controls, timely updates, and intelligence anomaly detection based on behavioural analytics in order to strengthen network resilience today. It is also important for security teams to minimise the exposure of interfaces, and wherever possible, to shield them behind zero-trust architectures that validate every connection attempt with a zero-trust strategy. 

The use of regular penetration testing, as well as active participation in information-sharing communities, can make it much easier to detect early warning signs before adversaries gain traction. The attackers are ultimately playing the long game, as can be seen by the recurring campaigns against Palo Alto Networks and Cisco infrastructure – scanning for vulnerabilities, waiting for them to emerge, and then attacking when they become complacent. Defenders' edge lies, therefore, in staying informed, staying updated, and staying ahead of the curve: staying informed and staying updated.

Lost or Stolen Phone? Here’s How to Protect Your Data and Digital Identity

 



In this age, losing a phone can feel like losing control over your digital life. Modern smartphones carry far more than contacts and messages — they hold access to emails, bank accounts, calendars, social platforms, medical data, and cloud storage. In the wrong hands, such information can be exploited for financial fraud or identity theft.

Whether your phone is misplaced, stolen, or its whereabouts are unclear, acting quickly is the key to minimizing damage. The following steps outline how to respond immediately and secure your data before it is misused.


1. Track your phone using official recovery tools

Start by calling your number to see if it rings nearby or if someone answers. If not, use your device’s official tracking service. Apple users can access Find My iPhone via iCloud, while Android users can log in to Find My Device.

These built-in tools can display your phone’s current or last known location on a map, play a sound to help locate it, or show a custom message on the lock screen with your contact details. Both services can be used from another phone or a web browser. Avoid third-party tracking apps, which are often unreliable or insecure.


2. Secure your device remotely

If recovery seems unlikely or the phone may be in someone else’s possession, immediately lock it remotely. This prevents unauthorized access to your personal files, communication apps, and stored credentials.

Through iCloud’s “Mark as Lost” or Android’s “Secure Device” option, you can set a new passcode and display a message requesting the finder to contact you. This function also disables features like Apple Pay until the device is unlocked, protecting stored payment credentials.


3. Contact your mobile carrier without delay

Reach out to your mobile service provider to report the missing device. Ask them to suspend your SIM to block calls, texts, and data usage. This prevents unauthorized charges and, more importantly, stops criminals from intercepting two-factor authentication (2FA) messages that could give them access to other accounts.

Request that your carrier blacklist your device’s IMEI number. Once blacklisted, it cannot be used on most networks, even with a new SIM. If you have phone insurance, inquire about replacement or reimbursement options during the same call.


4. File an official police report

While law enforcement may not always track individual devices, filing a report creates an official record that can be used for insurance claims, fraud disputes, or identity theft investigations.

Provide details such as the model, color, IMEI number, and the time and place where it was lost or stolen. The IMEI (International Mobile Equipment Identity) can be found on your phone’s box, carrier account, or purchase receipt.


5. Protect accounts linked to your phone

Once the device is reported missing, shift your focus to securing connected accounts. Start with your primary email, cloud services, and social media platforms, as they often serve as gateways to other logins.

Change passwords immediately, and if available, sign out from all active sessions using the platform’s security settings. Apple, Google, and Microsoft provide account dashboards that allow you to remotely sign out of all devices.

Enable multi-factor authentication (MFA) on critical accounts if you haven’t already. This adds an additional layer of verification that doesn’t rely solely on your phone.

Monitor your accounts closely for unauthorized logins, suspicious purchases, or password reset attempts. These could signal that your data is being exploited.


6. Remove stored payment methods and alert financial institutions

If your phone had digital wallets such as Apple Pay, Google Pay, or other payment apps, remove linked cards immediately. Apple’s Find My will automatically disable Apple Pay when a device is marked as lost, but it’s wise to verify manually.

Android users can visit payments.google.com to remove cards associated with their Google account. Then, contact your bank or card issuer to flag the loss and monitor for fraudulent activity. Quick reporting allows banks to block suspicious charges or freeze affected accounts.


7. Erase your device permanently (only when recovery is impossible)

If all efforts fail and you’re certain the device won’t be recovered, initiate a remote wipe. This deletes all data, settings, and stored media, restoring the device to factory condition.

For iPhones, use the “Erase iPhone” option under Find My. For Androids, use “Erase Device” under Find My Device. Once wiped, you will no longer be able to track the device, but it ensures that your personal data cannot be accessed or resold.


Be proactive, not reactive

While these steps help mitigate damage, preparation remains the best defense. Regularly enable tracking services, back up your data, use strong passwords, and activate device encryption. Avoid storing sensitive files locally when possible and keep your operating system updated for the latest security patches.

Losing a phone is stressful, but being prepared can turn a potential disaster into a controlled situation. With the right precautions and quick action, you can safeguard both your device and your digital identity.



AI vs AI: Wiz CTO Warns of a New Threat Frontier

 

Artificial intelligence may be revolutionising business operations, but it is also transforming the battlefield of cybersecurity. “Cybersecurity has always been a mind game,” says Ami Luttwak, Chief Technologist at Wiz, in a recent conversation with TechCrunch’s Equity.

“Whenever a new technology wave appears, it opens new doors for attackers to exploit.” 

As organisations race to integrate AI into everything from coding and automation to AI-driven agents, the speed of innovation is inadvertently widening the attack surface. Developers are shipping products faster, but in doing so, they sometimes compromise on security hygiene, creating fresh entry points for malicious actors. 

Wiz, a leading cloud security firm recently acquired by Google for 32 billion dollars, conducted internal tests that revealed a recurring flaw in applications built with “vibe coding,” a term for natural language-driven coding using AI assistants. 

The flaw often appeared in how authentication systems were implemented. “It wasn’t because developers didn’t care about security,” Luttwak explains. “It’s because AI agents follow your instructions literally. If you don’t explicitly tell them to build something securely, they won’t.” 

The trade-off between speed and security is nothing new, but the rise of generative AI has raised the stakes. Attackers are no longer using only automated scripts or malware kits; they are using AI models themselves. “You can actually see the attacker using prompts to attack,” Luttwak notes. “They find AI tools in your system and instruct them to send sensitive data, delete files, or even erase entire machines.” 

Attackers are increasingly infiltrating AI tools deployed internally by companies to improve productivity, turning them into stepping stones for supply chain attacks. By breaching a third-party service with deep integration rights, they can move laterally within a corporate network. 

For example, Drift, an AI-powered marketing and sales chatbot provider, was breached last month, compromising the Salesforce data of major enterprises including Cloudflare, Google, and Palo Alto Networks. Hackers exploited authentication tokens to impersonate the chatbot, query sensitive records, and navigate deeper into client environments. 

“The attacker’s code was itself generated through vibe coding,” Luttwak reveals. AI in every stage of attack Although AI adoption in enterprises remains limited, Luttwak estimates that only about one percent of organisations have fully implemented it. Yet Wiz is already witnessing AI-driven attacks impacting thousands of businesses each week. “If you trace the flow of a modern attack, AI is embedded at nearly every stage,” he says. “This revolution is faster than any we have seen before, and the security industry needs to move even faster to keep up.” 

He cited another major incident, the “s1ingularity” attack on Nx, a popular JavaScript build system. In that case, the malware detected developer tools such as Claude and Gemini and hijacked them to automatically scan systems for confidential data. Thousands of tokens and private GitHub keys were compromised. 

Evolving Wiz for the AI era 

Founded in 2020, Wiz initially focused on identifying and fixing cloud misconfigurations and vulnerabilities. But with AI now central to both development and exploitation, the company has expanded its security capabilities. 

In September 2024, Wiz introduced Wiz Code, a tool designed to secure software from the earliest stages of development, ensuring applications are “secure by design.” In April 2025, it launched Wiz Defend, a runtime protection suite that detects and mitigates active threats within cloud environments. 

To Luttwak, these tools reflect a broader mission he calls “horizontal security”-- understanding a customer’s applications and workflows deeply enough to create adaptive defences. “We need to understand why you’re building something,” he says. “That’s how we create security tools that truly understand you.” 

Building secure startups from day one 

The growing number of AI startups promising enterprise-grade insights has also raised security concerns. Luttwak cautions businesses to be selective before sharing sensitive data with emerging SaaS vendors. Startups, he says, must embed a security-first mindset from the beginning. 

“From day one, you need to think about security and compliance. From day one, you need to have a CISO, even if your team only has five people.” 

He recalls Wiz’s early journey: “We were SOC 2 compliant before we even had code. And trust me, it’s much easier to do when you have five employees than when you have 500.” For startups serving enterprise clients, Luttwak says data architecture should be a top priority. 

“If you are an AI company working with enterprises, design your system so customer data remains in their environment.” This approach not only strengthens security but also builds trust, a crucial element in today’s AI economy. 

A new frontier for cybersecurity innovation 

Luttwak believes this is a defining moment for cybersecurity innovation. Every area from phishing protection and malware detection to endpoint security and workflow automation is being reshaped by AI. 

The next generation of startups, he says, will focus on “vibe security,” creating systems that use AI to defend against AI-powered threats. “The game is wide open,” he concludes. “If every part of security is now under attack, it means we have to rethink every part of security.”

Microsoft Cuts Unit 8200’s Cloud Access, Exposing Gaps in Israel’s Digital Sovereignty

 

An unprecedented development has rattled Israel’s national security establishment. Reports suggest that Microsoft has cut off access to certain Azure cloud and AI services used by the Israel Defense Forces’ elite intelligence branch, Unit 8200. The move follows allegations that these technologies were deployed for mass surveillance of Palestinians—an action Microsoft deems a breach of its terms of service.

While the ethics of surveillance in counterterrorism is a larger debate, the immediate concern lies in Israel’s exposure to external control over its most critical security systems. This was not an abstract policy dispute—it was a switch flipped in Seattle that disrupted operations in Tel Aviv.

Circle One: Intelligence Systems in Jeopardy

The first impact was felt in signals intelligence. Big data analysis and AI-driven monitoring tools are indispensable for Unit 8200’s operations. Yet, reliance on cloud vendors means these systems can be disabled at any moment if deemed to violate “human rights.” In effect, Israel’s intelligence could be blinded not by hostile interference but by corporate compliance officers abroad.

Circle Two: Command and Control Risks

Operational systems—including command dashboards, encrypted communications, and battlefield simulations—also depend on commercial cloud providers. If Microsoft or Google were to restrict access to machine-learning tools used for logistics or targeting, military synchronization could collapse in real time. A decision made on America’s West Coast could affect soldiers on Israel’s frontlines.

Circle Three: The Nimbus Project Illusion

Officials have pointed to Project Nimbus as a safeguard, promising sovereign control via local data centers. But Nimbus infrastructure still runs on Google and AWS, subject to their “acceptable use” policies. Even with servers in Israel, providers retain the power to suspend operations if they view applications like checkpoint facial recognition as misuse. True sovereignty remains out of reach.

Circle Four: Civilian Security Systems

Beyond the military, hybrid civilian-defense platforms are equally exposed. Airport facial recognition, biometric border checks, and emergency alert apps all depend on global tech ecosystems. A single platform decision—such as Google Play disabling a national alert system—could compromise civilian safety during crises.

Israel’s pursuit of efficiency and scale through Big Tech outsourcing has come at the cost of sovereignty. With vital systems tied to foreign providers, the country’s security infrastructure is vulnerable to decisions made thousands of miles away. Unless Israel develops genuine sovereign data capabilities or enforces unbreakable contractual guarantees, the “kill switch” for its most critical defenses will remain in the hands of multinational corporations.

Google Warns of Cl0p Extortion Campaign Against Oracle E-Business Users

 

Google Mandiant and the Google Threat Intelligence Group are tracking a suspected extortion campaign by the Cl0p ransomware group targeting executives with claims of stealing Oracle E-Business Suite data. 

The hackers have demanded ransoms reaching up to $50 million, with cybersecurity firm Halcyon reporting multiple seven and eight-figure ransom demands in recent days. The group claims to have breached Oracle's E-Business Suite, which manages core operations including financial, supply chain, and customer relationship management functions.

Modus operandi 

The attackers reportedly hacked user emails and exploited Oracle E-Business Suite's default password reset functionality to steal valid credentials. This technique bypassed single sign-on protections due to the lack of multi-factor authentication on local Oracle accounts. At least one company has confirmed that data from their Oracle systems was stolen, according to sources familiar with the matter. The hackers provided proof of compromise to victims, including screenshots and file trees.

This activity began on or before September 29, 2025, though Mandiant experts remain in early investigation stages and have not yet substantiated all claims made by the group. Charles Carmakal, Mandiant's CTO, described the operation as a high-volume email campaign launched from hundreds of compromised accounts. Initial analysis confirms at least one compromised account previously associated with FIN11, a long-running financially motivated threat group known for deploying ransomware and engaging in extortion.

Threat actor background 

Since August 2020, FIN11 has targeted organizations across multiple industries including defense, energy, finance, healthcare, legal, pharmaceutical, telecommunications, technology, and transportation. The group is believed to operate from Commonwealth of Independent States countries, with Russian-language file metadata found in their malware code. In 2020, Mandiant observed FIN11 hackers using spear-phishing messages to distribute a malware downloader called FRIENDSPEAK.

An email address in the extortion notes ties to a Cl0p affiliate and includes Cl0p site contacts, though Google lacks definitive proof to confirm the attackers' claims. The malicious emails contain contact information verified as publicly listed on the Cl0p data leak site, strongly suggesting association with Cl0p and leveraging their brand recognition. Cl0p has launched major attacks in recent years exploiting zero-day flaws in popular software including Accellion, SolarWinds, Fortra GoAnywhere, and MOVEit.

Security recommendations

Oracle confirmed the investigation on October 3, 2025, stating that attacks potentially relate to critical vulnerabilities disclosed in their July 2025 Critical Patch Update. The company strongly encouraged customers to review the July update and patch their systems for protection. Mandiant researchers recommend investigating environments for indicators of compromise associated with Cl0p operations.

The Price of Cyberattacks: Why the Real Damage Goes Beyond the Ransom

 




When news breaks about a cyberattack, the ransom demand often steals the spotlight. It’s the most visible figure, millions demanded, negotiations unfolding, and sometimes, payment made. But in truth, that amount only scratches the surface. The real costs of a cyber incident often emerge long after the headlines fade, in the form of business disruptions, shaken trust, legal pressures, and a long, difficult road to recovery.

One of the most common problems organizations face after a breach is the communication gap between technical experts and senior leadership. While the cybersecurity team focuses on containing the attack, tracing its source, and preserving evidence, the executives are under pressure to reassure clients, restore operations, and navigate complex reporting requirements.

Each group works with valid priorities, but without coordination, efforts can collide. A system that’s isolated for forensic investigation may also be the one that the operations team needs to serve customers. This misalignment is avoidable if organizations plan beyond technology by assigning clear responsibilities across departments and conducting regular crisis simulations to ensure a unified response when an attack hits.

When systems go offline, the impact ripples across every department. A single infected server can halt manufacturing lines, delay financial transactions, or force hospitals to revert to manual record-keeping. Even after the breach is contained, lost time translates into lost revenue and strained customer relationships.

Many companies underestimate downtime in their recovery strategies. Backup plans often focus on restoring data, but not on sustaining operations during outages. Every organization should ask: Can employees access essential tools if systems are locked? Can management make decisions without their usual dashboards? If those answers are uncertain, then the recovery plan is incomplete.

Beyond financial loss, cyber incidents leave a lasting mark on reputation. Customers and partners may begin to question whether their information is safe. Rebuilding that trust requires transparent, timely, and fact-based communication. Sharing too much before confirming the facts can create confusion; saying too little can appear evasive.

Recovery also depends on how well a company understands its data environment. If logs are incomplete or investigations are slow, regaining credibility becomes even harder. The most effective organizations balance honesty with precision, updating stakeholders as verified information becomes available.

The legal consequences of a cyber incident often extend further than companies expect. Even if a business does not directly store consumer data, it may still have obligations under privacy laws, vendor contracts, or insurance terms. State and international regulations increasingly require timely disclosure of breaches, and failing to comply can result in penalties.

Engaging legal and compliance teams before a crisis ensures that everyone understands the organization’s obligations and can act quickly under pressure.

Cybersecurity is no longer just an IT issue; it’s a core business concern. Effective protection depends on organization-wide preparedness. That means bridging gaps between departments, creating holistic response plans that include legal and communication teams, and regularly testing how those plans perform under real-world pressure.

Businesses that focus on resilience, not just recovery, are better positioned to minimize disruption, maintain trust, and recover faster if a cyber incident occurs.



Global Supply Chains at Risk as Indian Third-Party Suppliers Face Rising Cybersecurity Breaches

 

Global supply chains face growing cybersecurity risks as research highlights vulnerabilities in Indian third-party suppliers. According to a recent report by risk management firm SecurityScorecard, more than half of surveyed suppliers in India experienced breaches last year, raising concerns about cascading effects on international businesses. The study examined security postures across multiple sectors, including manufacturing for aerospace and pharmaceuticals, as well as IT service providers. 

The findings suggest that security weaknesses among Indian suppliers are both more widespread and severe than analysts initially anticipated. These vulnerabilities could create a domino effect, exposing global companies that rely on Indian vendors to significant cyber threats. Despite the generally strong security posture of Indian IT service providers, they recorded the highest number of breaches in the study, underscoring their position as prime targets for attackers. 

SecurityScorecard noted that IT service providers worldwide face heightened cyber risks due to their central role in enabling third-party access, their expansive attack surfaces, and their value as high-profile targets. In India, IT companies were found to be particularly vulnerable to typosquatting domains, compromised credentials, and infected devices. The research further revealed that suppliers of outsourced IT operations and managed services were linked to 62.5% of all documented third-party breaches in the country—the highest proportion the company has ever recorded. 

Given India’s dominant role in the global IT services market, the implications are profound. Multinational corporations across industries rely heavily on Indian IT vendors, making them critical nodes in the international digital economy. “India is a cornerstone of the global digital economy,” said Ryan Sherstobitoff, Field Chief Threat Intelligence Officer at SecurityScorecard. “Our findings highlight both strong performance and areas where resilience must improve. Supply chain security is now an operational requirement.” 

The report also emphasized the risks of “fourth-party” vulnerabilities, where the suppliers of Indian companies themselves create additional points of weakness. A single ransomware attack or disruptive incident against an Indian vendor, the researchers warned, could halt manufacturing, delay service delivery, or disrupt logistics across multiple countries. 

The risks are not limited to India. A separate SecurityScorecard study revealed that 96% of Europe’s largest financial institutions have been affected by a breach at a third-party supplier, while 97% reported breaches stemming from fourth-party partners, a sharp increase from 84% two years earlier. 

As global supply chains become increasingly interconnected, these findings highlight the urgent need for businesses to strengthen third-party risk management and enforce stricter cybersecurity practices across their vendor ecosystems. Without stronger safeguards, both direct and indirect supplier vulnerabilities could leave multinational enterprises exposed to significant financial and operational disruptions.